131 research outputs found

    An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Background: Imaging of the human microcirculation in real time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated tool that can extract microvasculature information and quantitatively monitor changes in tissue perfusion would be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods: The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. The algorithm has two main parts: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation step, the microvascular network is extracted using multiple-level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated across frames to identify active blood vessels and capillaries with flow. Results: Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for healthy baseline and hemorrhagic conditions. These results were compared with independent FCD measurements made using a well-known semi-automated method. The fully automated algorithm demonstrated a significant decrease in FCD values during hemorrhage. Similar, but more variable, FCD values were calculated using a commercially available software program requiring manual editing. Conclusions: An entirely automated system for analyzing microcirculation videos was developed to reduce human interaction and computation time. The algorithm successfully stabilizes video recordings, segments blood vessels, identifies vessels without flow and calculates FCD in a fully automated process. The automated process provides an equal or better separation between healthy and hemorrhagic FCD values compared with currently available semi-automatic techniques. The proposed method shows promise for the quantitative measurement of changes occurring in the microcirculation during injury.
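
    A minimal sketch of the kind of processing this abstract describes: temporal frame differencing to flag pixels with flow, a simple intensity threshold for vessel candidates, and a skeleton-length proxy for functional capillary density. This is not the authors' implementation; the file name "sdf_clip.avi", the percentile thresholds, and the field-of-view area are placeholder assumptions.

```python
# Illustrative sketch, not the paper's implementation: flag "flowing" vessel
# pixels by temporal frame differencing, then estimate functional capillary
# density (FCD) as skeleton length per unit area.
import cv2
import numpy as np
from skimage.morphology import skeletonize

cap = cv2.VideoCapture("sdf_clip.avi")      # placeholder stabilized SDF clip
frames = []
ok, frame = cap.read()
while ok:
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
    ok, frame = cap.read()
cap.release()
stack = np.stack(frames)                    # shape (T, H, W)

# Vessel candidates: dark tubular structures -> simple intensity threshold.
# (The paper selects multiple threshold levels from training recordings.)
mean_img = stack.mean(axis=0)
vessel_mask = mean_img < np.percentile(mean_img, 20)

# Flow detection: pixels whose intensity changes frame to frame.
temporal_activity = np.abs(np.diff(stack, axis=0)).mean(axis=0)
flow_mask = temporal_activity > np.percentile(temporal_activity, 80)

perfused = vessel_mask & flow_mask
skeleton = skeletonize(perfused)            # 1-pixel-wide centerlines
area_mm2 = 1.0                              # placeholder field-of-view area
fcd = skeleton.sum() / area_mm2             # vessel-length proxy per mm^2
print(f"approximate FCD: {fcd:.1f} (arbitrary units)")
```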

    Biomedical Signal and Image Processing

    Written for senior-level and first-year graduate students in biomedical signal and image processing, this book describes the fundamental signal and image processing techniques that are used to process biomedical information. The book also discusses the application of these techniques to some of the main biomedical signals and images, such as EEG, ECG, MRI, and CT. New features of this edition include technical updates to each chapter along with the addition of many more examples, the majority of which are MATLAB based.

    Biomedical Signal and Image Processing

    First published in 2005, Biomedical Signal and Image Processing received a wide and welcome reception from universities and industry research institutions alike, offering detailed yet accessible information at the reference, upper-undergraduate, and first-year graduate level. Retaining all of the quality and precision of the first edition, Biomedical Signal and Image Processing, Second Edition offers a number of revisions and improvements to provide the most up-to-date reference available on the fundamental signal and image processing techniques that are used to process biomedical information. Addressing the application of standard and novel processing techniques to some of today’s principal biomedical signals and images over three sections, the book begins with an introduction to digital signal and image processing, including the Fourier transform, image filtering, edge detection, and the wavelet transform. The second section investigates specifically biomedical signals, such as ECG, EEG, and EMG, while the third focuses on imaging using CT, X-ray, MRI, ultrasound, positron emission tomography, and other biomedical imaging techniques. Updated and expanded, Biomedical Signal and Image Processing, Second Edition adds numerous additional, predominantly MATLAB-based examples to all chapters to illustrate the concepts described in the text and ensure a complete understanding of the material. The author takes great care to clarify ambiguities in some mathematical equations and to further explain and justify the more complex signal and image processing concepts, offering a complete and understandable approach to complicated concepts.
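
    As a small illustration of the kind of fundamental technique the book covers (here, the Fourier transform applied to a synthetic ECG-like signal), the sketch below uses Python/NumPy rather than the book's MATLAB examples; the sampling rate and signal components are assumptions made up for the demo.

```python
# Toy Fourier-transform demo on a synthetic ECG-like signal (Python/NumPy,
# not taken from the book, which uses MATLAB).
import numpy as np

fs = 250.0                                  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.2 * t)           # ~72 beats/min fundamental
sig += 0.3 * np.sin(2 * np.pi * 50 * t)     # powerline interference
sig += 0.1 * np.random.randn(t.size)        # measurement noise

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant frequency: {peak_hz:.2f} Hz")
```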

    Homomorphic Encryption for Machine Learning in Medicine and Bioinformatics

    Machine learning techniques are an excellent tool for the medical community to analyze large amounts of medical and genomic data. On the other hand, ethical concerns and privacy regulations prevent the free sharing of this data. Encryption methods such as fully homomorphic encryption (FHE) provide a way to evaluate computations over encrypted data. Using FHE, machine learning models such as deep learning, decision trees, and naive Bayes have been implemented for private prediction using medical data. FHE has also been shown to enable secure genomic algorithms, such as paternity testing, and the secure application of genome-wide association studies. This survey provides an overview of fully homomorphic encryption and its applications in medicine and bioinformatics. The high-level concepts behind FHE and its history are introduced. Details on current open-source implementations are provided, as are the state of FHE for privacy-preserving techniques in machine learning and bioinformatics and future growth opportunities for FHE.
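
    To make the idea of "evaluating over encrypted data" concrete, the toy example below implements a Paillier-style additively homomorphic scheme with deliberately tiny, insecure parameters. It is not fully homomorphic encryption and not one of the open-source FHE libraries the survey discusses; it only shows that ciphertexts can be combined so that decryption yields a function (here, a sum) of the underlying plaintexts.

```python
# Toy Paillier-style scheme: additively homomorphic only, tiny insecure
# parameters, for illustration of computing on ciphertexts.
from math import gcd
import random

p, q = 293, 433                            # toy primes; never use sizes like this
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)        # modular inverse (Python 3.8+)

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts.
c1, c2 = encrypt(17), encrypt(25)
print("decrypted sum:", decrypt((c1 * c2) % n2))   # -> 42
```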

    Tensor Denoising via Amplification and Stable Rank Methods

    Tensors in the form of multilinear arrays are ubiquitous in data science applications. Captured real-world data, including video, hyperspectral images, and discretized physical systems, naturally occur as tensors and often come with attendant noise. Under the additive noise model and with the assumption that the underlying clean tensor has low rank, many denoising methods have been created that utilize tensor decomposition to effect denoising through low-rank tensor approximation. However, all such decomposition methods require estimating the tensor rank, or related measures such as the tensor spectral and nuclear norms, all of which are NP-hard problems. In this work we leverage our previously developed framework of tensor amplification, which provides good approximations of the spectral and nuclear tensor norms, to denoise synthetic tensors of various sizes, ranks, and noise levels, along with real-world tensors derived from physiological signals. We also introduce two new notions of tensor rank -- stable slice rank and stable X-rank -- and new denoising methods based on their estimation. The experimental results show that in the low-rank context, tensor-based amplification provides comparable denoising performance in high signal-to-noise ratio (SNR) settings and superior performance in noisy (i.e., low SNR) settings, while the stable X-rank method achieves superior denoising performance on the physiological signal data.
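
    For readers unfamiliar with "denoising through low-rank tensor approximation," the sketch below shows a generic baseline: truncated HOSVD, which projects each mode unfolding onto its leading singular vectors. It is not the paper's tensor-amplification or stable X-rank estimators, and the target multilinear rank (2, 2, 2) is assumed rather than estimated.

```python
# Generic low-rank tensor denoising baseline: truncated HOSVD with an assumed
# multilinear rank. Not the paper's amplification or stable X-rank methods.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def hosvd_denoise(T, ranks):
    # Project each mode unfolding onto its leading singular vectors.
    out = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T               # rank-r orthogonal projector
        out = fold(P @ unfold(out, mode), mode, out.shape)
    return out

rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(20, 2)) for _ in range(3))
clean = np.einsum("ir,jr,kr->ijk", A, B, C)     # low-rank ground truth
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = hosvd_denoise(noisy, ranks=(2, 2, 2))
print("error before:", np.linalg.norm(noisy - clean))
print("error after: ", np.linalg.norm(denoised - clean))
```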

    A Hierarchical Method Based on Active Shape Models and Directed Hough Transform for Segmentation of Noisy Biomedical Images; Application in Segmentation of Pelvic X-ray Images

    Background: Traumatic pelvic injuries are often associated with severe, life-threatening hemorrhage, and immediate medical treatment is therefore vital. However, patient prognosis depends heavily on the type, location and severity of the bone fracture, and the complexity of the pelvic structure presents diagnostic challenges. Automated fracture detection from initial patient X-ray images can assist physicians in rapid diagnosis and treatment, and a first and crucial step of such a method is to segment key bone structures within the pelvis; these structures can then be analyzed for specific fracture characteristics. The Active Shape Model has been applied to this task in other bone structures but requires manual initialization by the user. This paper describes an algorithm for automatic initialization and segmentation of key pelvic structures - the iliac crests, pelvic ring, left and right pubis and femurs - using a hierarchical approach that combines the directed Hough transform and Active Shape Models. Results: Performance of the automated algorithm is compared with results obtained via manual initialization. An error measure is calculated based on the shapes detected with each method and the gold-standard shapes. ANOVA results on these error measures show that the automated algorithm performs at least as well as the manual method. Visual inspection by two radiologists and one trauma surgeon also indicates generally accurate performance. Conclusion: The hierarchical algorithm described in this paper automatically detects and segments key structures from pelvic X-rays. Unlike various other X-ray segmentation methods, it does not require manual initialization or input. Moreover, it handles the inconsistencies between X-ray images in a clinical environment and performs successfully in the presence of fracture. This method and the segmentation results provide a valuable base for future work in fracture detection.
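
    A hedged sketch of the automatic-initialization idea: use a Hough transform to find roughly circular structures (for example, femoral heads) in a pelvic X-ray and use their centers as starting positions for a shape model. The file name, blur size, Hough parameters, and radius range are placeholders, and this is not the paper's hierarchical ASM pipeline.

```python
# Hough-based automatic initialization sketch (OpenCV); parameters and the
# image path are placeholders, and this is not the paper's full pipeline.
import cv2
import numpy as np

img = cv2.imread("pelvis_xray.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
assert img is not None, "replace with a real pelvic X-ray image"
smoothed = cv2.GaussianBlur(img, (9, 9), 2)

circles = cv2.HoughCircles(
    smoothed, cv2.HOUGH_GRADIENT, 1.2, 100,   # dp, min distance between centers
    param1=120, param2=40,                    # Canny threshold / accumulator votes
    minRadius=20, maxRadius=80,               # plausible femoral-head radii (px)
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate femoral head at ({x}, {y}), radius {r}")
        # An Active Shape Model's mean shape would be placed at (x, y) here
        # and then refined by the usual iterative ASM search.
```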

    Brain mapping and detection of functional patterns in fMRI using wavelet transform; application in detection of dyslexia

    Background: Functional Magnetic Resonance Imaging (fMRI) has been proven useful for studying brain function. However, due to the existence of noise and distortion, mapping between the fMRI signal and the actual neural activity is difficult. Because of this difficulty, differential pattern analysis of fMRI brain images for healthy and diseased cases is regarded as an important research topic. From fMRI scans, regions of increased blood flow can be identified as activated brain regions. Also, based on the multi-sliced images of the volume data, fMRI provides the functional information needed to detect and analyze different parts of the brain. Methods: In this paper, the capability of a hierarchical method from our previous study, which performs an optimization algorithm based on a modified maximum correlation model (MCM), is evaluated. The optimization algorithm adopts the MCM to detect active regions that contain significant responses. Specifically, the optimization algorithm is examined on two groups of datasets, dyslexic and healthy subjects, to verify that the algorithm enhances the quality of signal activity in the regions of interest in the brain. After verifying the algorithm, the discrete wavelet transform (DWT) is applied to identify differences between healthy and dyslexic subjects. Results: We successfully showed that our optimization algorithm improves the fMRI signal activity for both healthy and dyslexic subjects. In addition, we found that DWT-based features can identify differences between healthy and dyslexic subjects. Conclusion: The results of this study provide insights into associations of functional abnormalities in dyslexic subjects that may be helpful for neurobiologically distinguishing them from healthy subjects.
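
    A minimal sketch of the DWT feature-extraction step described above: decompose a voxel's BOLD time series with a discrete wavelet transform and summarize the energy in each sub-band. The wavelet ('db4'), decomposition level, repetition time, and synthetic time series are assumptions; the paper's MCM optimization step is not shown.

```python
# DWT feature sketch: per-sub-band energies of a (synthetic) voxel time series.
import numpy as np
import pywt

rng = np.random.default_rng(1)
tr = 2.0                                    # assumed repetition time, seconds
t = np.arange(200) * tr
bold = np.sin(2 * np.pi * 0.025 * t) + 0.5 * rng.normal(size=t.size)

coeffs = pywt.wavedec(bold, "db4", level=3)          # [cA3, cD3, cD2, cD1]
features = [float(np.sum(c ** 2)) for c in coeffs]   # energy per sub-band
print("sub-band energies:", np.round(features, 2))
```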

    Non-linear dynamical signal characterization for prediction of defibrillation success through machine learning

    Background: Ventricular Fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest whose main treatment is defibrillation through direct-current countershock to achieve return of spontaneous circulation. However, defibrillation is often unsuccessful and may even lead to the transition of VF to more nefarious rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success based on examination of the VF waveform. To date, however, no analytical technique has been widely accepted. We developed a unique approach to computational VF waveform analysis, with and without the addition of the end-tidal carbon dioxide (PetCO2) signal, using advanced machine learning algorithms. We compare these results with those obtained using the Amplitude Spectral Area (AMSA) technique. Methods: A total of 90 pre-countershock ECG signals were analyzed from an accessible prehospital cardiac arrest database. A unified predictive model, based on signal processing and machine learning, was developed with time-series and dual-tree complex wavelet transform features. Upon selection of correlated variables, a parametrically optimized support vector machine (SVM) model was trained to predict outcomes on the test sets. Training and testing were performed with nested 10-fold cross-validation and 6–10 features for each test fold. Results: The integrative model performs real-time, short-term (7.8-second) analysis of the electrocardiogram (ECG). For the total of 90 signals, 34 successful and 56 unsuccessful defibrillations were classified with an average Accuracy and Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) of 82.2% and 85%, respectively. Incorporation of the end-tidal carbon dioxide signal boosted Accuracy and ROC AUC to 83.3% and 93.8%, respectively, for a smaller dataset containing 48 signals. VF analysis using AMSA resulted in an accuracy and ROC AUC of 64.6% and 60.9%, respectively. Conclusion: We report the development and first use of a nontraditional, non-linear method of analyzing the VF ECG signal, yielding high predictive accuracy for defibrillation success. Furthermore, incorporation of features from the PetCO2 signal noticeably increased model robustness. These predictive capabilities should further improve with the availability of a larger database. http://deepblue.lib.umich.edu/bitstream/2027.42/112730/1/12911_2012_Article_558.pd
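
    For reference, the sketch below computes the AMSA baseline the study compares against: each spectral amplitude of a short windowed pre-shock VF segment is weighted by its frequency and summed over a band. The 4.1 s window, 2-48 Hz band, and synthetic stand-in waveform are assumptions for illustration; the paper's wavelet/SVM pipeline is not reproduced here.

```python
# AMSA sketch: frequency-weighted sum of spectral amplitudes of a windowed
# pre-shock VF segment; window length, band, and waveform are placeholders.
import numpy as np

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 4.1, 1 / fs)                # short pre-countershock window
vf = np.random.default_rng(3).normal(size=t.size)   # stand-in VF waveform

amps = np.abs(np.fft.rfft(vf * np.hanning(vf.size)))
freqs = np.fft.rfftfreq(vf.size, d=1 / fs)
band = (freqs >= 2.0) & (freqs <= 48.0)      # commonly used AMSA band
amsa = float(np.sum(amps[band] * freqs[band]))
print(f"AMSA (arbitrary units): {amsa:.1f}")
```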